Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
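A minimal sketch of what instruction prompt tuning can look like in practice, assuming a HuggingFace-style model that accepts an `inputs_embeds` argument: only a small block of learned soft-prompt vectors is trained, while the base LLM and the hard instruction/exemplar text stay frozen. The class name, dimensions, and number of soft tokens are illustrative assumptions, not the Med-PaLM configuration.

```python
import torch
import torch.nn as nn

class InstructionPromptTuner(nn.Module):
    """Prepends learned soft-prompt embeddings to frozen hard instructions and exemplars."""

    def __init__(self, frozen_lm: nn.Module, embedding: nn.Embedding, n_soft_tokens: int = 40):
        super().__init__()
        self.lm = frozen_lm
        self.embedding = embedding
        for p in self.lm.parameters():          # the base LLM stays frozen
            p.requires_grad = False
        dim = embedding.embedding_dim
        self.soft_prompt = nn.Parameter(0.01 * torch.randn(n_soft_tokens, dim))  # only trainable part

    def forward(self, instruction_ids: torch.Tensor, question_ids: torch.Tensor):
        # [learned soft prompt] + [hard instructions / clinician-written exemplars] + [question]
        hard = self.embedding(torch.cat([instruction_ids, question_ids], dim=1))
        soft = self.soft_prompt.unsqueeze(0).expand(hard.size(0), -1, -1)
        return self.lm(inputs_embeds=torch.cat([soft, hard], dim=1))
```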
Xenophobia is one of the key drivers of marginalisation, discrimination, and conflict, yet many prominent machine learning (ML) fairness frameworks fail to comprehensively measure or mitigate the resulting xenophobic harms. Here we aim to bridge this conceptual gap and help facilitate safe and ethical design of artificial intelligence (AI) solutions. We ground our analysis of the impact of xenophobia by first identifying distinct types of xenophobic harms, and then applying this framework across a number of prominent AI application domains, reviewing the potential interplay between AI and xenophobia on social media and recommendation systems, healthcare, immigration, employment, as well as biases in large pre-trained models. These help inform our recommendations towards an inclusive, xenophilic design of future AI systems.
We introduce a challenging decision-making task that we call active acquisition for multimodal temporal data (A2MT). In many real-world scenarios, input features are not readily available at test time and must instead be acquired at significant cost. With A2MT, we aim to learn agents that actively select which modalities of an input to acquire, trading off acquisition cost and predictive performance. A2MT extends a previous task called active feature acquisition to temporal decision making about high-dimensional inputs. Further, we propose a method based on the Perceiver IO architecture to address A2MT in practice. Our agents are able to solve a novel synthetic scenario requiring practically relevant cross-modal reasoning skills. On two large-scale, real-world datasets, Kinetics-700 and AudioSet, our agents successfully learn cost-reactive acquisition behavior. However, an ablation reveals they are unable to learn adaptive acquisition strategies, emphasizing the difficulty of the task even for state-of-the-art models. Applications of A2MT may be impactful in domains like medicine, robotics, or finance, where modalities differ in acquisition cost and informativeness.
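As a rough illustration of the cost/performance trade-off that A2MT formalizes, here is a hedged sketch of a training objective that charges the agent for every modality it chooses to acquire. The shapes, cost vector, and loss form are assumptions for illustration, not the paper's Perceiver IO-based agent.

```python
import torch
import torch.nn.functional as F

def a2mt_objective(logits, labels, acquire_mask, costs):
    """
    logits:       (B, num_classes) final predictions
    labels:       (B,) ground-truth class labels
    acquire_mask: (B, T, M) in [0, 1]; 1 means modality m was acquired at time t
    costs:        (M,) per-acquisition cost of each modality
    """
    task_loss = F.cross_entropy(logits, labels)                    # predictive performance
    acquisition_cost = (acquire_mask * costs).sum(dim=(1, 2)).mean()  # what the agent "paid"
    return task_loss + acquisition_cost

# Toy usage with random tensors (batch of 4, 10 time steps, 2 modalities, 5 classes).
B, T, M, C = 4, 10, 2, 5
loss = a2mt_objective(torch.randn(B, C), torch.randint(0, C, (B,)),
                      torch.rand(B, T, M), torch.tensor([0.1, 0.5]))
```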
Machine learning (ML) holds great promise for improving healthcare, but it is critical to ensure that its use does not propagate or amplify health disparities. An important step is to characterize the (un)fairness of ML models, that is, their tendency to perform differently across subgroups of the population, and to understand the underlying mechanisms. One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data. Diagnosing this phenomenon is difficult, however, especially when sensitive attributes are causally linked with disease. Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as part of the fairness evaluation of clinical ML systems, and demonstrate its application to clinical tasks in radiology and dermatology. Finally, our approach reveals cases in which shortcutting is not responsible for unfairness, highlighting the need for a holistic approach to fairness mitigation in medical AI.
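One hedged way to picture the multi-task setup described above: a shared encoder feeds both a clinical-label head and a sensitive-attribute head, and how strongly the attribute signal is encoded in the shared representation (swept via the auxiliary loss weight) can be related to observed fairness gaps. The architecture, head shapes, and loss weighting below are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

class MultiTaskProbe(nn.Module):
    """Shared encoder with a clinical-label head and a sensitive-attribute head."""

    def __init__(self, encoder: nn.Module, feat_dim: int, n_classes: int, n_attr: int):
        super().__init__()
        self.encoder = encoder
        self.disease_head = nn.Linear(feat_dim, n_classes)
        self.attribute_head = nn.Linear(feat_dim, n_attr)

    def forward(self, x):
        z = self.encoder(x)
        return self.disease_head(z), self.attribute_head(z)

def multitask_loss(disease_logits, attr_logits, y, a, attr_weight=0.1):
    ce = nn.functional.cross_entropy
    # Sweeping attr_weight modulates how much the shared representation is
    # encouraged (or discouraged) to encode the sensitive attribute.
    return ce(disease_logits, y) + attr_weight * ce(attr_logits, a)
```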
Despite recent progress in self-supervised representation learning with residual networks, these methods still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings. Building on the theoretical insights of Mitrovic et al. (2021), we propose ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views. ReLICv2 achieves 77.1% top-1 classification accuracy on ImageNet under linear evaluation with a ResNet50 architecture, and 80.6% with larger ResNet models, outperforming previous state-of-the-art self-supervised methods by a wide margin. Most notably, ReLICv2 is the first representation learning method to consistently outperform the supervised baseline in a like-for-like comparison across a range of standard ResNet architectures. Finally, we show that despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers.
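A hedged sketch of the general shape of such an objective, in the spirit of combining a contrastive term with an explicit invariance term: each view's similarity distribution over the batch is trained to pick out its positive, and the two views' distributions are additionally pulled together with a KL term. The temperature, weighting, and loss form are illustrative, not the published ReLICv2 loss.

```python
import torch
import torch.nn.functional as F

def contrastive_plus_invariance_loss(z1, z2, temperature=0.1, alpha=1.0):
    """z1, z2: (B, d) embeddings of two views of the same batch of images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits12 = z1 @ z2.t() / temperature        # (B, B) cross-view similarities
    logits21 = z2 @ z1.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)

    # Contrastive part: the matching view is the positive on the diagonal.
    contrastive = F.cross_entropy(logits12, targets) + F.cross_entropy(logits21, targets)

    # Invariance part: make the two views' similarity distributions agree.
    p12 = F.log_softmax(logits12, dim=1)
    p21 = F.log_softmax(logits21, dim=1)
    invariance = F.kl_div(p12, p21, log_target=True, reduction="batchmean")
    return contrastive + alpha * invariance
```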
We introduce a new real-valued invariant, called the natural slope of a hyperbolic knot in the 3-sphere, which is defined in terms of its cusp geometry. We show that twice the knot signature and the natural slope differ by at most a constant times the hyperbolic volume divided by the cube of the injectivity radius. This inequality was discovered using machine learning to detect relationships between various knot invariants. It has applications to Dehn surgery and to the 4-ball genus. We also show a refined version of the inequality in which the upper bound is a linear function of the volume, and the slope is corrected by terms corresponding to short geodesics that link the knot an odd number of times.
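In symbols, the main inequality described above can be paraphrased as follows; the constant is left unnamed and this is a restatement of the abstract, not the paper's precise theorem.

```latex
% Natural slope vs. signature, as described in the abstract:
\[
  \bigl|\, 2\,\sigma(K) - \operatorname{slope}(K) \,\bigr|
  \;\le\; c \cdot \frac{\operatorname{vol}(K)}{\operatorname{inj}(K)^{3}},
\]
% where \sigma(K) is the signature, vol(K) the hyperbolic volume, and inj(K)
% the injectivity radius. The refined version replaces the right-hand side by
% a linear function of vol(K) and corrects slope(K) by terms coming from short
% geodesics that link the knot an odd number of times.
```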
We apply deep learning (DL) to the task of brain tumor detection from magnetic resonance spectroscopy (MRS) data. Medical applications are often plagued by data scarcity and corruption by noise, and both problems are prominent in our dataset. Furthermore, a different number of spectra is available for each patient. We address these issues by framing the task as a multiple instance learning (MIL) problem. Specifically, we aggregate multiple spectra from the same patient into a "bag" for classification and apply data augmentation techniques. To achieve permutation invariance when processing the bags, we propose two approaches: (1) applying min-, max-, and average-pooling across all samples in a bag, and (2) applying an attention mechanism. We test both approaches on multiple neural network architectures. We demonstrate that classification performance improves significantly when training on multiple instances rather than on single spectra. We also propose a simple oversampling data augmentation method and show that it can further improve performance. Finally, we show that our proposed model outperforms manual classification by neuroradiologists on most performance metrics.
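A minimal sketch of the two permutation-invariant bag aggregations named above: concatenated min/max/mean pooling over the spectra in a bag, and attention-weighted pooling. Shapes and layer sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def pool_min_max_mean(instance_feats: torch.Tensor) -> torch.Tensor:
    """instance_feats: (n_instances, d) -> (3d,) bag representation."""
    return torch.cat([instance_feats.min(dim=0).values,
                      instance_feats.max(dim=0).values,
                      instance_feats.mean(dim=0)], dim=-1)

class AttentionPooling(nn.Module):
    """Attention-weighted average over the instances (spectra) in a bag."""

    def __init__(self, d: int, hidden: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(d, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, instance_feats: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.score(instance_feats), dim=0)  # (n_instances, 1) attention weights
        return (w * instance_feats).sum(dim=0)                # (d,) bag representation
```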
What is being learned by superhuman neural network agents such as AlphaZero? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to understand faithful explanations of their decisions will be limited, ultimately restricting what we can achieve with neural network interpretability. In this work, we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts, we show when and where these concepts are represented in the AlphaZero network. We also provide a behavioural analysis focused on opening play, including qualitative analysis from chess Grandmaster Vladimir Kramnik. Finally, we carry out a preliminary investigation into the low-level details of AlphaZero's representations, and make the resulting behavioural and representational analyses available online.
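As a hedged illustration of concept probing in general: fit a simple probe that predicts a human-defined chess concept (e.g., "has a passed pawn") from a network's internal activations at a given layer and training checkpoint; the probe's held-out accuracy indicates when and where the concept is represented. The function names and data sources below are placeholders, not AlphaZero internals or the paper's exact probing setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_concept(activations: np.ndarray, concept_labels: np.ndarray) -> float:
    """activations: (n_positions, d) hidden features; concept_labels: (n_positions,) binary labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        activations, concept_labels, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Held-out accuracy for this (layer, checkpoint) pair; sweeping layers and
    # checkpoints gives a picture of where and when the concept emerges.
    return probe.score(X_te, y_te)
```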
We investigate an asymptotically fast method for transforming signed distance bounds into polygon meshes. This is achieved by combining sphere tracing (also known as ray marching) with one of the traditional polygonization schemes (e.g., marching cubes). We call this method GridHopping. We provide theoretical and experimental evidence that it has $O(N^2 \log N)$ computational complexity for a polygonization grid with $N^3$ cells. The method is tested on a set of primitive shapes as well as on signed distance fields generated from point clouds by machine learning. Given its speed, simplicity, and portability, we argue that it could prove useful during the modeling stage as well as for shape compression. The code is available here: https://github.com/nenadmarkus/gridhopping
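A hedged sketch of the core idea as I read it: sphere tracing along grid columns uses the signed distance bound to hop over empty space, so only cells near the surface are handed to a local polygonizer such as marching cubes. The SDF interface, callback, and stepping details are placeholders and may differ from the actual GridHopping implementation.

```python
import numpy as np

def grid_hop_column(sdf, x, y, z_min, z_max, cell_size, process_cell, eps=1e-4):
    """March along the +z column at (x, y); call process_cell near surface crossings."""
    z = z_min
    while z < z_max:
        d = sdf(np.array([x, y, z]))
        if abs(d) < cell_size:
            process_cell(x, y, z)      # near the surface: polygonize this cell locally
            z += cell_size             # ...then advance one cell
        else:
            z += max(abs(d), eps)      # far from the surface: hop by the distance bound
    # Repeating this over all N^2 (x, y) columns is what gives the sub-O(N^3)
    # behavior argued for in the abstract.
```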